Exploring Explanations Improves the Robustness of In-Context Learning

Honda, Ukyo, Oka, Tatsushi

arXiv.org Artificial Intelligence

In-context learning (ICL) has emerged as a successful paradigm for leveraging large language models (LLMs). However, it often struggles to generalize beyond the distribution of the provided demonstrations. A recent advancement in enhancing robustness is ICL with explanations (X-ICL), which improves prediction reliability by guiding LLMs to understand and articulate the reasoning behind correct labels. Building on this approach, we introduce an advanced framework that extends X-ICL by systematically exploring explanations for all possible labels (X$^2$-ICL), thereby enabling more comprehensive and robust decision-making. Experimental results on multiple natural language understanding datasets validate the effectiveness of X$^2$-ICL, demonstrating significantly improved robustness to out-of-distribution data compared to the existing ICL approaches.
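The abstract describes X²-ICL as extending explanation-based prompting to every candidate label rather than only the correct one. A minimal sketch of how such a prompt might be assembled is below; the helper name, demonstration format, and per-label explanation layout are assumptions for illustration, not the paper's actual prompting template.

```python
# Hypothetical sketch of X^2-ICL-style prompt construction: each
# demonstration articulates an explanation for EVERY candidate label,
# not just the gold one, before stating the answer.

def build_x2_icl_prompt(demonstrations, query, labels):
    """demonstrations: list of (text, gold_label, {label: explanation}) tuples.
    Returns a single prompt string ending at the point where the model
    is expected to continue with its prediction."""
    lines = []
    for text, gold, explanations in demonstrations:
        lines.append(f"Input: {text}")
        for cand in labels:
            # Explore the reasoning that would support each possible label.
            lines.append(f"If the label were '{cand}': "
                         f"{explanations.get(cand, '(no explanation)')}")
        lines.append(f"Label: {gold}")
        lines.append("")  # blank line between demonstrations
    lines.append(f"Input: {query}")
    lines.append("Label:")
    return "\n".join(lines)


demos = [(
    "The movie was a delight from start to finish.",
    "positive",
    {"positive": "the text expresses clear approval of the movie",
     "negative": "nothing in the text conveys disapproval"},
)]
prompt = build_x2_icl_prompt(demos, "I regret watching it.",
                             ["positive", "negative"])
```

The resulting string would be passed to an LLM as-is; the design point the abstract emphasizes is that contrasting explanations across all labels gives the model a fuller basis for a robust decision than a single correct-label rationale.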


"Martyr!" Plays Its Subject for Laughs but Is Also Deadly Serious

The New Yorker

A novel with the title "Martyr!" arrives on the scene preloaded and explosive. The word is fraught, even more so now than when the book's author, the Iranian American poet Kaveh Akbar, chose it. It signals that Akbar is fascinated with words in action, words that someone has reached for in a state of excitation, like joy or deep grief. The shouter of "Martyr!" bears something within him which he is determined to force the word to express. But the title's punctuation ironizes or undercuts this intention, as if to suggest that language signifies in ways that are impossible to control.


Why the U.S. Is Backing Killer Robots

#artificialintelligence

As the power of artificial intelligence grows, the likelihood of a future war filled with killer robots grows as well. Proponents suggest that lethal autonomous weapon systems (LAWs) might cause less "collateral damage," while critics warn that giving machines the power of life and death would be a terrible mistake. Last month's UN meeting on 'killer robots' in Geneva ended with victory for the machines, as a small number of countries blocked progress towards an international ban. Some opponents of such a ban, like Russia and Israel, were to be expected since both nations already have advanced military AI programs. But surprisingly, the U.S. also agreed with them.


Introduction to the Symposium on AI and the Mitigation of Human Error

Mittu, Ranjeev (Naval Research Laboratory) | Taylor, Gavin (US Naval Academy) | Sofge, Don (Naval Research Laboratory) | Lawless, W. F. (Paine College)

AAAI Conferences

However, foundational problems remain in the continuing development of AI for team autonomy, especially with objective measures able to optimize team function, performance and composition. AI approaches often attempt to address autonomy by modeling aspects of human decision-making or behavior, either mindfully or inadvertently, by individuals or teams of humans. One worry about this bright future is that jobs may be lost; from Mims (2015), "Something potentially momentous is happening inside AI startups, and it's a practice that many of their established ..."